
    Improving novelty detection using the reconstructions of nearest neighbours

    We show that using nearest neighbours in the latent space of autoencoders (AEs) significantly improves the performance of semi-supervised novelty detection in both single- and multi-class contexts. Autoencoding methods detect novelty by learning to differentiate between the non-novel training class(es) and all other unseen classes. Our method harnesses a combination of the reconstructions of the nearest neighbours and the latent-neighbour distances of a given input's latent representation. We demonstrate that our nearest-latent-neighbours (NLN) algorithm is memory- and time-efficient, does not require significant data augmentation, and is not reliant on pre-trained networks. Furthermore, we show that the NLN algorithm is easily applicable to multiple datasets without modification. Additionally, the proposed algorithm is agnostic to autoencoder architecture and reconstruction error method. We validate our method across several standard datasets for a variety of autoencoding architectures, such as vanilla, adversarial and variational autoencoders, using reconstruction, residual or feature-consistent losses. The results show that the NLN algorithm grants up to a 17% increase in Area Under the Receiver Operating Characteristic (AUROC) curve performance for the multi-class case and 8% for single-class novelty detection.
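
    As a rough illustration of the idea, the sketch below combines neighbour reconstructions with latent-neighbour distances into a single novelty score. The `encoder`, `decoder` and `z_train` objects and the `alpha` weighting are assumed placeholders for this sketch, not the authors' released implementation.

```python
# Hedged sketch of an NLN-style novelty score (illustrative names only).
# Assumes a trained autoencoder exposing encode()/decode() callables and a
# bank of latent codes z_train computed from the non-novel training data.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def nln_novelty_scores(x_test, encoder, decoder, z_train, k=5, alpha=0.5):
    """Combine neighbour-reconstruction error with latent-neighbour distance."""
    z_test = encoder(x_test)                           # (n, d) latent codes
    nn = NearestNeighbors(n_neighbors=k).fit(z_train)
    dists, idx = nn.kneighbors(z_test)                 # each (n, k)

    recon_err = np.empty(len(x_test))
    for i in range(len(x_test)):
        # Reconstruct the input from its neighbours' latent codes and
        # measure how poorly those reconstructions match the input.
        neighbour_recons = decoder(z_train[idx[i]])    # (k, *sample_shape)
        errs = np.mean((neighbour_recons - x_test[i]) ** 2,
                       axis=tuple(range(1, x_test.ndim)))
        recon_err[i] = errs.mean()

    latent_dist = dists.mean(axis=1)
    # Higher score -> further from the training manifold -> more novel.
    return alpha * recon_err + (1 - alpha) * latent_dist
```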

    Learning to detect radio frequency interference in radio astronomy without seeing it

    Radio Frequency Interference (RFI) corrupts astronomical measurements, thus affecting the performance of radio telescopes. To address this problem, supervised segmentation models have been proposed as candidate solutions to RFI detection. However, the unavailability of large labelled datasets, due to the prohibitive cost of annotation, makes these solutions unusable. To overcome these shortcomings, we focus on the inverse problem: training models only on uncontaminated emissions, thereby learning to discriminate RFI from all known astronomical signals and system noise. We use Nearest-Latent-Neighbours (NLN), an algorithm that utilises both the reconstructions and the latent distances to the nearest neighbours in the latent space of generative autoencoding models for novelty detection. The uncontaminated regions are selected using weak labels in the form of RFI flags (generated by classical RFI flagging methods), available from most radio astronomical data archives at no additional cost. We evaluate performance on two independent datasets: one simulated from the HERA telescope and another consisting of real observations from the LOFAR telescope. Additionally, we provide a small expert-labelled LOFAR dataset (i.e., strong labels) for evaluation of our and other methods. Performance is measured using AUROC, AUPRC and the maximum F1-score for a fixed threshold. For the simulated HERA dataset we outperform the current state-of-the-art by approximately 1% in AUROC and 3% in AUPRC. Furthermore, for the LOFAR dataset our algorithm offers a 4% increase in both AUROC and AUPRC, at the cost of some degradation in F1-score performance, without any manual labelling.
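
    A minimal sketch of the weak-label setup described above, assuming spectrogram arrays paired with per-pixel RFI flag masks as inputs; `select_clean_patches`, the patch size and the fixed-threshold evaluation helper are illustrative assumptions, not the paper's code.

```python
# Hedged sketch: use classical-flagger output as weak labels to keep only
# RFI-free training patches, then score a detector with the metrics named
# in the abstract (AUROC, AUPRC, F1 at a fixed threshold).
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score, f1_score

def select_clean_patches(spectrograms, rfi_flags, patch=32):
    """Keep only patches whose weak-label flag mask contains no RFI."""
    clean = []
    for spec, flags in zip(spectrograms, rfi_flags):
        for t in range(0, spec.shape[0] - patch + 1, patch):
            for f in range(0, spec.shape[1] - patch + 1, patch):
                if not flags[t:t + patch, f:f + patch].any():
                    clean.append(spec[t:t + patch, f:f + patch])
    return np.stack(clean)

def evaluate(scores, labels, threshold):
    """Per-pixel novelty scores vs. strong (expert) labels."""
    return {
        "AUROC": roc_auc_score(labels, scores),
        "AUPRC": average_precision_score(labels, scores),
        "F1": f1_score(labels, scores > threshold),
    }
```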

    Difference Field Estimation for Enhanced 3-D Texture Segmentation

    The three-dimensional (3-D) segmentation of volumetric imagery poses the challenge of estimating and compensating for the inter-slice difference within multi-texture 3-D data. The textures are modelled as realizations of Gaussian Markov Random Fields (GMRFs) on 3-D lattices, and the classification of the central point of a thin-plate block of data is performed by calculating the class probability mass functions (p.m.f.s) for the block under the different texture models, in a supervised framework. In this paper we propose a novel method to identify the difference field by Kullback-Leibler minimization of the distance between the class p.m.f.s calculated at thin-plate 3-D blocks of data centred at the points of interest. A fast FFT-based technique is presented for calculating the probability density function (p.d.f.) of the data given the model, which facilitates the calculation of the classification p.m.f.s. The estimated difference field is used to enhance the performance of a computational-volume based 3-D GMRF segmentation algorithm. The performance of the overall method is illustrated with a simulation study on a mosaic of synthetic 3-D textures and on MRI images of the human brain.
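
    To make the Kullback-Leibler minimization step concrete, here is a small sketch that selects the candidate inter-slice shift whose class p.m.f. is closest, in KL distance, to that of the current block. The interface and names are assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch: difference-field estimation as KL minimization over
# candidate inter-slice shifts of a thin-plate block.
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """D_KL(p || q) between two discrete p.m.f.s, with clipping for stability."""
    p = np.clip(np.asarray(p, dtype=float), eps, None)
    q = np.clip(np.asarray(q, dtype=float), eps, None)
    return float(np.sum(p * np.log(p / q)))

def estimate_difference(pmf_current, candidate_pmfs):
    """candidate_pmfs: mapping {shift: class p.m.f. of the shifted block in the next slice}.
    Returns the shift minimising the KL distance to the current block's p.m.f."""
    return min(candidate_pmfs, key=lambda s: kl_divergence(pmf_current, candidate_pmfs[s]))
```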

    Does prospect theory explain the disposition effect?

    The disposition effect is the observation that investors sell winning stocks too early and hold losing stocks too long. A standard explanation of the disposition effect refers to prospect theory, and in particular to asymmetric risk aversion, according to which investors are risk-averse when faced with gains and risk-seeking when faced with losses. We show that, for reasonable parameter values, the disposition effect cannot, however, be explained by prospect theory as proposed by Kahneman and Tversky. The reason is that those investors who sell winning stocks and hold losing assets would not have invested in stocks in the first place. That is to say, the standard prospect-theory argument is sound ex post, assuming that the investment has taken place, but not ex ante, where the investment must be made in the first place.
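
    For context, the asymmetric risk attitudes referred to above are usually formalized with the Kahneman-Tversky value function; the parametric form below is the standard textbook one and is not necessarily the calibration used in the paper.

```latex
v(x) =
\begin{cases}
x^{\alpha} & x \ge 0,\\
-\lambda\,(-x)^{\beta} & x < 0,
\end{cases}
\qquad 0 < \alpha,\ \beta \le 1,\quad \lambda > 1.
```

    Concavity over gains yields risk aversion for gains, convexity over losses yields risk seeking for losses, and the loss-aversion coefficient lambda > 1 makes losses loom larger than equal-sized gains.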

    Image Based Classification of Slums, Built-Up and Non-Built-Up Areas in Kalyan and Bangalore, India

    Slums, characterized by sub-standard housing conditions, are common in fast-growing Asian cities. However, reliable and up-to-date information on their locations and development dynamics is scarce. Despite numerous studies, the task of delineating slum areas remains a challenge, and no general agreement exists about the most suitable method for detecting slums or for assessing detection performance. In this paper, standard computer vision methods, namely the Bag of Visual Words framework and Speeded-Up Robust Features (SURF), have been applied for image-based classification of slum and non-slum areas in Kalyan and Bangalore, India, using very high resolution RGB images. To delineate slum areas, image segmentation is performed as pixel-level classification for three classes: Slums, Built-up and Non-Built-up. For each of the three classes, image tiles were randomly selected using ground truth observations. A multi-class support vector machine classifier has been trained on 80% of the tiles, and the remaining 20% were used for testing. The final image segmentation has been obtained by classifying every 10th pixel, followed by majority filtering to assign classes to all remaining pixels. The results demonstrate the ability of the method to map slums with very different visual characteristics in two very different Indian cities.
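
    A hedged sketch of such a Bag-of-Visual-Words plus multi-class SVM pipeline is given below. ORB descriptors stand in for SURF (which requires opencv-contrib), the vocabulary size and kernel are arbitrary, and all names are illustrative rather than the authors' code.

```python
# Hedged sketch: visual vocabulary + BoVW histograms + multi-class SVM
# for tile classification into Slum / Built-up / Non-Built-up.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def bovw_histogram(image, vocab, detector):
    """Normalised histogram of visual-word occurrences for one image tile."""
    _, desc = detector.detectAndCompute(image, None)
    hist = np.zeros(vocab.n_clusters)
    if desc is not None and len(desc) > 0:
        words = vocab.predict(desc.astype(np.float32))
        for w in words:
            hist[w] += 1
        hist /= hist.sum()
    return hist

def train_bovw_svm(train_tiles, train_labels, n_words=200):
    """Fit the visual vocabulary and a multi-class SVM on labelled tiles."""
    detector = cv2.ORB_create()
    descs = [detector.detectAndCompute(img, None)[1] for img in train_tiles]
    vocab = KMeans(n_clusters=n_words).fit(
        np.vstack([d for d in descs if d is not None]).astype(np.float32))
    X = np.array([bovw_histogram(img, vocab, detector) for img in train_tiles])
    clf = SVC(kernel="rbf").fit(X, train_labels)  # 80/20 split handled by the caller
    return vocab, clf, detector
```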